
    Robust Focal Length Computation

    The problem of automatically computing the focal lengths of a pair of cameras from corresponding pairs of images has long been a daunting task for the 3D reconstruction community. A number of methods have been developed, but the commonly held view is that none of them works well enough to be used in practical situations. We focus on the particular task of computing focal lengths from point correspondences, which we deem to be the missing link in the solution of the problem. We especially focus on existing algebraic solvers for computing the fundamental matrix and on the Bougnoux formula for computing the focal lengths from it. We survey these methods, as well as the iterative methods [10, 4] proposed as their extensions, and analyze their performance. Our results show that the number of imaginary estimates, as well as the error of the estimation, declines with a growing number of correspondences used. Moreover, based on our analysis we suggest that the computation of the ratio of focal lengths r = f2/f1 is more robust than the computation of f1 or f2 alone. We propose an improvement to the solver of [10] based on this suggestion. We furthermore assess the performance of the methods in degenerate situations and show that for higher levels of noise the effect of the degeneracies decreases significantly. Specifically, the degenerate case of intersecting optical axes is shown to almost vanish for realistic levels of noise. Finally, we analyze the problem of computing the focal length from the theoretical standpoint of algebraic geometry and give two new formulae for computing the camera focal length from a fundamental matrix. We show that using the right one of them might help to avoid a known degeneracy: specifically, the one where the plane defined by the baseline and the optical axis of one camera is perpendicular to the plane defined by the baseline and the optical axis of the other camera, and where the Bougnoux formula [3] fails. The degeneracy then reduces to the case where all three formulae fail.
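    As an illustration of the computation discussed above, the following is a minimal sketch of a Bougnoux-style focal length estimate from a fundamental matrix, assuming the commonly quoted form of the formula with Ĩ = diag(1, 1, 0) and the epipole of the second image; the sign convention and which camera the result belongs to should be verified against [3]. A negative value corresponds to an imaginary estimate, as discussed in the abstract.

```python
import numpy as np

def bougnoux_focal_sq(F, p1, p2):
    """Hedged sketch of the Bougnoux formula [3]:

        f^2 = - (p2^T [e2]_x I~ F p1)(p2^T F p1) / (p2^T [e2]_x I~ F I~ F^T p2)

    F      : 3x3 fundamental matrix (e.g. from a 7- or 8-point solver)
    p1, p2 : homogeneous principal points, e.g. np.array([cx, cy, 1.0])
    Returns the squared focal length of one camera; the other camera's
    estimate is obtained by swapping the roles of the images (F -> F^T,
    p1 <-> p2). Exact conventions should be checked against [3].
    """
    # Epipole in the second image (e2^T F = 0): left singular vector for
    # the smallest singular value, i.e. the (closest) left null vector of F.
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]
    e2x = np.array([[0.0,   -e2[2],  e2[1]],
                    [e2[2],  0.0,   -e2[0]],
                    [-e2[1], e2[0],  0.0]])
    It = np.diag([1.0, 1.0, 0.0])   # I~ = diag(1, 1, 0)
    num = (p2 @ e2x @ It @ F @ p1) * (p2 @ F @ p1)
    den = p2 @ e2x @ It @ F @ It @ F.T @ p2
    return -num / den               # negative result => imaginary focal length
```

    Note that the scale of F and of the epipole cancels between numerator and denominator, so the estimate does not depend on the (arbitrary) normalization of the fundamental matrix.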

    METRA: Scalable Unsupervised RL with Metric-Aware Abstraction

    Unsupervised pre-training strategies have proven to be highly effective in natural language processing and computer vision. Likewise, unsupervised reinforcement learning (RL) holds the promise of discovering a variety of potentially useful behaviors that can accelerate the learning of a wide array of downstream tasks. Previous unsupervised RL approaches have mainly focused on pure exploration and mutual information skill learning. However, despite the previous attempts, making unsupervised RL truly scalable still remains a major open challenge: pure exploration approaches might struggle in complex environments with large state spaces, where covering every possible transition is infeasible, and mutual information skill learning approaches might completely fail to explore the environment due to the lack of incentives. To make unsupervised RL scalable to complex, high-dimensional environments, we propose a novel unsupervised RL objective, which we call Metric-Aware Abstraction (METRA). Our main idea is, instead of directly covering the entire state space, to only cover a compact latent space Z that is metrically connected to the state space S by temporal distances. By learning to move in every direction in the latent space, METRA obtains a tractable set of diverse behaviors that approximately cover the state space, being scalable to high-dimensional environments. Through our experiments in five locomotion and manipulation environments, we demonstrate that METRA can discover a variety of useful behaviors even in complex, pixel-based environments, being the first unsupervised RL method that discovers diverse locomotion behaviors in pixel-based Quadruped and Humanoid. Our code and videos are available at https://seohong.me/projects/metra
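    As an illustration only, the snippet below sketches one way the idea in the abstract could be instantiated as an intrinsic reward: move along a sampled latent direction z while softly constraining adjacent states to be at most distance 1 apart in the latent space, so that latent distances track temporal distances. The representation phi, the penalty form, and the weight lam are assumptions made for this sketch, not METRA's exact training objective.

```python
import numpy as np

def metra_style_reward(phi, s, s_next, z, lam=30.0):
    """Intrinsic reward for one transition, in the spirit of METRA (sketch).

    phi    : mapping from a state to a vector in the latent space Z
             (a learned network in practice; a fixed map here)
    s, s_next : consecutive states
    z      : sampled skill / latent direction
    lam    : penalty weight for the temporal-distance (1-Lipschitz-like)
             constraint -- an illustrative choice, not a value from the paper.
    """
    d = phi(s_next) - phi(s)
    reward = float(d @ z)                               # progress along direction z
    violation = max(0.0, float(np.linalg.norm(d)) - 1.0)  # adjacent states should stay within distance 1
    return reward - lam * violation

# Toy usage: a random linear "representation" of an 8-D state in a 2-D latent space.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))
phi = lambda s: W @ s
s, s_next = rng.normal(size=8), rng.normal(size=8)
z = np.array([1.0, 0.0])          # "move in the +x latent direction"
print(metra_style_reward(phi, s, s_next, z))
```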

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Measurements of top-quark pair differential cross-sections in the eμ channel in pp collisions at √s = 13 TeV using the ATLAS detector
